Gaussian Differential Privacy on Riemannian Manifolds
We develop an advanced approach for extending Gaussian Differential Privacy (GDP) to general Riemannian manifolds. The concept of GDP stands out as a prominent privacy definition that strongly warrants extension to manifold settings, due to its central limit properties. By harnessing the power of the renowned Bishop-Gromov theorem in geometric analysis, we propose a Riemannian Gaussian distribution that integrates the Riemannian distance, allowing us to achieve GDP in Riemannian manifolds with bounded Ricci curvature. To the best of our knowledge, this work marks the first instance of extending the GDP framework to accommodate general Riemannian manifolds, encompassing curved spaces, and circumventing the reliance on tangent space summaries. We provide a simple algorithm to evaluate the privacy budget $\mu$ on any one-dimensional manifold and introduce a versatile Markov Chain Monte Carlo (MCMC)-based algorithm to calculate $\mu$ on any Riemannian manifold with constant curvature. Through simulations on one of the most prevalent manifolds in statistics, the unit sphere $S^d$, we demonstrate the superior utility of our Riemannian Gaussian mechanism in comparison to the previously proposed Riemannian Laplace mechanism for implementing GDP.
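The abstract's sampling ingredient can be made concrete on the simplest case it covers, a one-dimensional manifold of constant curvature. Below is a minimal sketch (all function names are hypothetical, not from the paper) of a Metropolis-Hastings sampler for a Riemannian Gaussian on the unit circle $S^1$, with density proportional to $\exp(-d(x,\eta)^2 / (2\sigma^2))$ where $d$ is the geodesic distance:

```python
import math
import random

def geodesic_dist(x, y):
    """Geodesic distance between two angles (radians) on the unit circle."""
    d = abs(x - y) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def riemannian_gaussian_mcmc(eta, sigma, n_samples, burn_in=500, step=0.5, seed=0):
    """Hypothetical sketch: Metropolis-Hastings targeting the density
    proportional to exp(-d(x, eta)^2 / (2 sigma^2)) on S^1."""
    rng = random.Random(seed)
    log_p = lambda z: -geodesic_dist(z, eta) ** 2 / (2 * sigma ** 2)
    x, out = eta, []
    for i in range(burn_in + n_samples):
        prop = (x + rng.gauss(0, step)) % (2 * math.pi)  # wrapped proposal
        if rng.random() < math.exp(min(0.0, log_p(prop) - log_p(x))):
            x = prop
        if i >= burn_in:
            out.append(x)
    return out

samples = riemannian_gaussian_mcmc(eta=math.pi, sigma=0.3, n_samples=2000)
```

The samples concentrate around the footpoint `eta`; on higher-dimensional constant-curvature manifolds the same accept/reject structure applies with the manifold's own geodesic distance.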
Fully Adaptive Composition for Gaussian Differential Privacy
Smith, Adam, Thakurta, Abhradeep
We show that Gaussian Differential Privacy (GDP), a variant of differential privacy tailored to the analysis of Gaussian noise addition, composes gracefully even in the presence of a fully adaptive analyst. Such an analyst selects mechanisms (to be run on a sensitive data set) and their privacy budgets adaptively, that is, based on the answers from other mechanisms run previously on the same data set; the analyst's choices at each stage can depend arbitrarily on the interaction up to that point. We prove that the privacy parameters add up in exactly the same way as in nonadaptive composition. In the language of Rogers, Roth, Ullman and Vadhan, this gives a filter for GDP with the same parameters as for nonadaptive composition: it simply sums the squares of the privacy budgets and compares the total to a threshold. One can also characterize GDP, via Blackwell's theorem, in terms of bounds on the ROC curve for all tests that aim to distinguish M(D) from M(D').
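The filter described in this abstract reduces to simple arithmetic: accept a new mechanism with budget $\mu_i$ only while $\sum_i \mu_i^2$ stays at or below the square of the overall threshold. A minimal sketch (the function name and interface are illustrative, not from the paper):

```python
import math

def gdp_filter(budgets, mu_threshold):
    """Illustrative GDP privacy filter: mechanisms with GDP parameters
    mu_i compose to overall parameter sqrt(sum of mu_i^2); the filter
    stops accepting queries once that would exceed mu_threshold."""
    total_sq = 0.0
    accepted = []
    for mu in budgets:
        if total_sq + mu ** 2 > mu_threshold ** 2:
            break  # refusing this query keeps the overall guarantee intact
        total_sq += mu ** 2
        accepted.append(mu)
    return accepted, math.sqrt(total_sq)

# Four mechanisms at mu = 0.5 each fit exactly under an overall mu = 1 budget,
# since sqrt(4 * 0.25) = 1.
accepted, mu_total = gdp_filter([0.5, 0.5, 0.5, 0.5], mu_threshold=1.0)
```

In the fully adaptive setting each `mu` would be chosen on the fly after seeing earlier answers; the result of the paper is that this same sum-of-squares accounting remains valid.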
Rejoinder: Gaussian Differential Privacy
Dong, Jinshuo, Roth, Aaron, Su, Weijie J.
We warmly thank Editor Paul Smith for selecting our paper for discussion and are extremely grateful to all the discussants for taking their valuable time to provide engaging and stimulating feedback on our work. These insights situate our work in context and provide promising directions for future research. We are excited to see that thoughts about theoretical complements and new applications are already emerging. A general view, shared by all discussants, is that privacy is a first-order concern in many data science problems. We are very pleased to learn that our statistics community welcomes new foundational development and methodological contributions that allow for privacy protections in statistical data analysis.
Federated $f$-Differential Privacy
Zheng, Qinqing, Chen, Shuxiao, Long, Qi, Su, Weijie J.
Unlike traditional distributed training approaches that upload all the data to central servers, federated learning performs on-device training, and only summaries of local data or local models are exchanged among clients. Typically, the clients repeatedly upload their local models to the server, which averages them into a global model that is shared back with the clients. This offers a plausible solution to the critical data privacy issue: sensitive information about individuals, such as typing history, shopping transactions, geographical locations, and medical records, stays local. Nonetheless, a malicious client who participates in the federated learning might still be able to learn information about the other clients' data through the shared model's weights. This is because it is possible for an adversary to learn about or even identify certain individuals by simply tweaking the input datasets and probing the output of the algorithm [FJR15, SSSS17]. This gives rise to a pressing call for privacy-preserving federated learning algorithms. Accordingly, we urgently need a rigorous and principled framework to enhance data privacy, and to quantitatively answer the important questions: Can another client identify the presence or absence of any individual record in my data in federated learning? Worse, what if all the other clients ally with each other to attack my data?
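The upload-and-average loop described above is commonly privatized by clipping each client's update and adding Gaussian noise before the server averages. The following is a minimal sketch under those standard assumptions (names and the plain-list representation are illustrative, not the paper's method):

```python
import random

def noisy_federated_average(client_updates, clip, sigma, seed=0):
    """Hypothetical sketch: each client clips its model update to norm
    `clip` and adds Gaussian noise scaled to that clip, so the server
    only ever averages privatized updates, never raw local models."""
    rng = random.Random(seed)
    privatized = []
    for w in client_updates:
        norm = sum(v * v for v in w) ** 0.5
        scale = min(1.0, clip / norm) if norm > 0 else 1.0
        privatized.append([v * scale + rng.gauss(0, sigma * clip) for v in w])
    # server-side averaging of the noisy, clipped updates
    return [sum(col) / len(privatized) for col in zip(*privatized)]

global_update = noisy_federated_average([[1.0, 2.0], [3.0, 4.0]], clip=10.0, sigma=0.1)
```

With `sigma = 0` and updates already inside the clipping ball, this reduces to plain federated averaging; the noise scale trades off utility against the guarantee that one client's data cannot be singled out from the shared weights.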